Kalman model

What is a Kalman model/filter?

When the hidden variables of a Markov chain are continuous and the probability distributions involved are Gaussian, the model is usually called a Kalman model.

How does it work?

Kalman filters assume a linear dynamical system, in which the latent state evolves as

$$s_t = D s_{t-1} + w_{t-1}$$

where $D$ is a scalar that models how the estimate changes over time, and $w_t \sim \mathcal{N}(0, \sigma_p^2)$ is white Gaussian noise.
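To make this concrete, here is a minimal simulation of the latent process in Python (the values of `D`, `sigma_p`, and the horizon `T` are illustrative assumptions, not part of the model):

```python
import numpy as np

# Minimal sketch of the latent dynamics s_t = D * s_{t-1} + w_{t-1}.
# The values of D, sigma_p, and T are illustrative assumptions.
rng = np.random.default_rng(seed=0)
D, sigma_p, T = 0.9, 0.5, 100   # dynamics scalar, process-noise std, time steps

s = np.zeros(T)
for t in range(1, T):
    w = rng.normal(0.0, sigma_p)   # white Gaussian process noise w_{t-1}
    s[t] = D * s[t - 1] + w        # state update
```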

Thus, a Kalman filter estimates a posterior probability distribution recursively over time using a mathematical model of the process and incoming measurements.

The posterior mean can be expressed as:

$$\bar{\mu}_t = (1 - K) D \bar{\mu}_{t-1} + K m_t = D \bar{\mu}_{t-1} + K (m_t - D \bar{\mu}_{t-1})$$

where $K$ is known as the Kalman gain, which is a function of $t$, and $m_t$ denotes the measurement at time $t$.
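As a sketch, the same update in Python for the scalar case (the function name `update_mean` and passing the gain `K` as an argument are our own illustrative choices; in a full filter $K$ is recomputed at every step):

```python
def update_mean(mu_prev, m_t, D, K):
    """Posterior-mean update: nudge the prediction D * mu_prev toward the
    measurement m_t by a fraction K (the Kalman gain)."""
    prediction = D * mu_prev
    return prediction + K * (m_t - prediction)
```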

Implement Kalman filter with the sum rule and product rule for Gaussians

Step 1: Change yesterday's posterior into today's prior
Use the mathematical model to calculate how deterministic changes in the process shift yesterday's posterior, $\mathcal{N}(\mu_{s_{t-1}}, \sigma_{s_{t-1}}^2)$, and how random changes in the process broaden the shifted distribution:

$$p(s_t \mid m_{1:t-1}) = p(D s_{t-1} + w_{t-1} \mid m_{1:t-1}) = \mathcal{N}\!\left(D \mu_{s_{t-1}} + 0,\; D^2 \sigma_{s_{t-1}}^2 + \sigma_p^2\right)$$

where $\sigma_p$ denotes the process-noise standard deviation and $m_{1:t-1}$ denotes the measurements up to time $t-1$.
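A minimal sketch of this prediction step in Python, for the scalar case (the function name `predict` is ours):

```python
def predict(mu_prev, var_prev, D, sigma_p):
    """Step 1: map yesterday's posterior N(mu_prev, var_prev) to today's
    prior by shifting with the dynamics and broadening with process noise."""
    mu_prior = D * mu_prev                      # deterministic shift
    var_prior = D**2 * var_prev + sigma_p**2    # broadened variance
    return mu_prior, var_prior
```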

Step 2: Multiply today's prior by likelihood

Use the latest measurement to form a new estimate somewhere between this measurement and what we predicted in Step 1. The new posterior is the result of multiplying the Gaussian computed in Step 1 (a.k.a. today's prior) by the likelihood, which is also modeled as a Gaussian, $\mathcal{N}(m_t, \sigma_m^2)$:

2a: add information from prior and likelihood

To find the posterior variance, we first compute the posterior information (which is the inverse of the variance) by adding the information provided by the prior and the likelihood:

$$\frac{1}{\sigma_{s_t}^2} = \frac{1}{D^2 \sigma_{s_{t-1}}^2 + \sigma_p^2} + \frac{1}{\sigma_m^2}$$

Now we can take the inverse of the posterior information to get back the posterior variance.
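A one-line sketch of Step 2a in Python (the function name `posterior_variance` is ours; `var_prior` is the prior variance computed in Step 1):

```python
def posterior_variance(var_prior, sigma_m):
    """Step 2a: add the information (inverse variance) of the prior and
    the likelihood, then invert to recover the posterior variance."""
    info_posterior = 1.0 / var_prior + 1.0 / sigma_m**2
    return 1.0 / info_posterior
```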

2b: add means from prior and likelihood

To find the posterior mean, we calculate a weighted average of the means from the prior and the likelihood, where each weight, $g$, is just the fraction of information that each Gaussian provides!

$$g_{\text{prior}} = \frac{\text{information}_{\text{prior}}}{\text{information}_{\text{posterior}}}, \qquad g_{\text{likelihood}} = \frac{\text{information}_{\text{likelihood}}}{\text{information}_{\text{posterior}}}$$

$$\bar{\mu}_t = g_{\text{prior}}\, D \mu_{s_{t-1}} + g_{\text{likelihood}}\, m_t$$
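Putting Steps 1, 2a, and 2b together, a single recursive update might look like the following sketch (scalar case; the function name `kalman_step` is our own):

```python
def kalman_step(mu_prev, var_prev, m_t, D, sigma_p, sigma_m):
    """One full recursive update: Step 1 (predict), 2a (variance), 2b (mean)."""
    # Step 1: today's prior
    mu_prior = D * mu_prev
    var_prior = D**2 * var_prev + sigma_p**2
    # Step 2a: posterior variance from added information
    var_post = 1.0 / (1.0 / var_prior + 1.0 / sigma_m**2)
    # Step 2b: information-weighted average of prior mean and measurement
    g_prior = var_post / var_prior        # information_prior / information_posterior
    g_likelihood = var_post / sigma_m**2  # information_likelihood / information_posterior
    mu_post = g_prior * mu_prior + g_likelihood * m_t
    return mu_post, var_post
```

Note that $g_{\text{likelihood}}$ plays the role of the Kalman gain from earlier: since $g_{\text{prior}} + g_{\text{likelihood}} = 1$, the mean update is exactly $\bar{\mu}_t = (1 - K) D \bar{\mu}_{t-1} + K m_t$ with $K = g_{\text{likelihood}}$.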